library(tidyverse) # for graphing and data cleaning
library(googlesheets4) # for reading googlesheet data
library(lubridate) # for date manipulation
library(ggthemes) # for even more plotting themes
library(geofacet) # for special faceting with US map layout
gs4_deauth() # To not have to authorize each time you knit.
theme_set(theme_minimal()) # My favorite ggplot() theme :)
# Lisa's garden data
garden_harvest <- read_sheet("https://docs.google.com/spreadsheets/d/1DekSazCzKqPS2jnGhKue7tLxRU3GVL1oxi-4bEM5IWw/edit?usp=sharing") %>%
  mutate(date = ymd(date))
# Seeds/plants (and other garden supply) costs
supply_costs <- read_sheet("https://docs.google.com/spreadsheets/d/1dPVHwZgR9BxpigbHLnA0U99TtVHHQtUzNB9UR0wvb7o/edit?usp=sharing",
                           col_types = "ccccnn")
# Planting dates and locations
plant_date_loc <- read_sheet("https://docs.google.com/spreadsheets/d/11YH0NtXQTncQbUse5wOsTtLSKAiNogjUA21jnX5Pnl4/edit?usp=sharing",
                             col_types = "cccnDlc") %>%
  mutate(date = ymd(date))
# Tidy Tuesday data
kids <- readr::read_csv('https://raw.githubusercontent.com/rfordatascience/tidytuesday/master/data/2020/2020-09-15/kids.csv')
Before starting your assignment, you need to get yourself set up on GitHub and make sure GitHub is connected to RStudio. To do that, you should read the instructions (through the “Cloning a repo” section) and watch the video here. Then, do the following (if you get stuck on a step, don’t worry, I will help! You can always get started on the homework and we can figure out the GitHub piece later):
Add keep_md: TRUE in the YAML heading. The .md file is a markdown (NOT R Markdown) file that is an interim step to creating the html file. Markdown files are displayed fairly nicely on GitHub, so we want to keep the .md file and look at it there. Click the boxes next to these two files, commit the changes (remember to include a commit message), and push them (green up arrow). Put your name at the top of the document.
For ALL graphs, you should include appropriate labels.
Feel free to change the default theme, which I currently have set to theme_minimal().
Use good coding practice. Read the short sections on good code with pipes and ggplot2. This is part of your grade!
When you are finished with ALL the exercises, uncomment the options at the top so your document looks nicer. Don’t do it before then, or else you might miss some important warnings and messages.
These exercises will reiterate what you learned in the “Expanding the data wrangling toolkit” tutorial. If you haven’t gone through the tutorial yet, you should do that first.
Summarize the garden_harvest data to find the total harvest weight in pounds for each vegetable and day of week. Display the results so that the vegetables are rows and the days of the week are columns.
garden_harvest %>%
mutate(day_of_week = weekdays(date)) %>%
group_by(vegetable, day_of_week) %>%
  summarize(total_wt_lbs = sum(weight)/454) %>% # weight is recorded in grams; 454 g per lb
pivot_wider(names_from = day_of_week,
values_from = total_wt_lbs)
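One refinement worth noting: vegetable/day combinations with no harvest show up as NA after pivoting. A sketch of the same pipeline with values_fill = 0 (tidyr >= 1.1) to show zeros instead:
garden_harvest %>%
  mutate(day_of_week = weekdays(date)) %>%
  group_by(vegetable, day_of_week) %>%
  summarize(total_wt_lbs = sum(weight)/454) %>%
  pivot_wider(names_from = day_of_week,
              values_from = total_wt_lbs,
              values_fill = 0) # days with no harvest become 0 instead of NA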
Summarize the garden_harvest data to find the total harvest in pounds for each vegetable variety and then try adding the plot variable from the plant_date_loc table. This will not turn out perfectly. What is the problem? How might you fix it?
garden_harvest %>%
group_by(vegetable, variety) %>%
summarize(total_harvest = sum(weight)/454) %>%
left_join(plant_date_loc,
by = "variety")
The join is by variety alone, so a variety planted in more than one plot matches multiple rows of plant_date_loc and the harvest totals get duplicated; there is no way to discern which harvests came from which plot on which date. We would have to assign plots for each harvest, but that would not be true to the data. One possible fix is to collapse the plots for each variety into a single value before joining, as sketched below.
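A minimal sketch of that fix. It assumes plant_date_loc also has a vegetable column (joining on both vegetable and variety avoids matching variety names shared across vegetables); if it doesn’t, join by variety alone:
garden_harvest %>%
  group_by(vegetable, variety) %>%
  summarize(total_harvest = sum(weight)/454) %>%
  left_join(plant_date_loc %>%
              group_by(vegetable, variety) %>%
              # one row per variety, with all of its plots in one string
              summarize(plots = paste(unique(plot), collapse = ", ")),
            by = c("vegetable", "variety"))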
Describe how you would use the garden_harvest and supply_costs datasets, along with data from somewhere like this, to answer this question. You can answer this in words, referencing various join functions. You don’t need R code but could provide some if it’s helpful.
You could create your own dataset of the unit cost of each vegetable at Whole Foods, compare it to the unit cost of supplies for the garden, and cross-reference that cost difference with the volume of the harvest. To do this, you would perform a left join of the dataset created from the Whole Foods information to the supply_costs dataset, and then another join of that result with garden_harvest.
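A sketch of what that could look like. The whole_foods_prices tibble and its price_per_lb values are hypothetical stand-ins for data we would collect ourselves, and the supply_costs column name price_with_tax is an assumption that may need adjusting:
# Hypothetical store prices we would assemble by hand (made-up numbers)
whole_foods_prices <- tibble(
  vegetable = c("tomatoes", "lettuce", "zucchini"),
  price_per_lb = c(3.49, 2.99, 1.99)
)

garden_harvest %>%
  group_by(vegetable) %>%
  summarize(total_lbs = sum(weight)/454) %>%
  left_join(whole_foods_prices, by = "vegetable") %>%
  left_join(supply_costs %>%
              group_by(vegetable) %>%
              summarize(total_cost = sum(price_with_tax)), # assumed column name
            by = "vegetable") %>%
  mutate(store_value = total_lbs * price_per_lb,
         savings = store_value - total_cost)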
Subset the garden_harvest data to tomatoes, reorder the varieties by harvest date, and make a barplot of the total harvest in pounds for each variety.
garden_harvest %>%
filter(vegetable == "tomatoes") %>%
mutate(variety = fct_reorder(variety, date)) %>%
group_by(variety) %>%
summarize(total_wt_lbs = sum(weight)/454) %>%
  ggplot(aes(x = variety, y = total_wt_lbs)) +
  geom_col() +
  labs(title = "Total Tomato Harvest by Variety",
       x = "Variety",
       y = "Total Harvest (lbs)")
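Note that fct_reorder() orders by the median date by default; if the intent is to order varieties by their first harvest date, a summary function can be supplied. A sketch:
garden_harvest %>%
  filter(vegetable == "tomatoes") %>%
  mutate(variety = fct_reorder(variety, date, min)) %>% # min = first harvest date
  group_by(variety) %>%
  summarize(total_wt_lbs = sum(weight)/454) %>%
  ggplot(aes(x = variety, y = total_wt_lbs)) +
  geom_col() +
  labs(x = "Variety", y = "Total Harvest (lbs)")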
In the garden_harvest data, create two new variables: one that makes the varieties lowercase and another that finds the length of the variety name. Arrange the data by vegetable and length of variety name (smallest to largest), with one row for each vegetable variety. HINT: use str_to_lower(), str_length(), and distinct().
garden_harvest %>%
mutate(lowvar = str_to_lower(variety),
lenvar = str_length(variety)) %>%
group_by(vegetable) %>%
  arrange(lenvar, .by_group = TRUE) %>%
distinct(variety)
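An equivalent sketch that applies distinct() first, which avoids grouping and keeps the two new columns in the output:
garden_harvest %>%
  distinct(vegetable, variety) %>%
  mutate(lowvar = str_to_lower(variety),
         lenvar = str_length(variety)) %>%
  arrange(vegetable, lenvar)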
In the garden_harvest data, find all distinct vegetable varieties that have “er” or “ar” in their name. HINT: str_detect() with an “or” statement (use the | for “or”) and distinct().
garden_harvest %>%
  distinct(variety) %>%
  filter(str_detect(variety, "er|ar")) # keep only varieties containing "er" or "ar"
In this activity, you’ll examine some factors that may influence the use of bicycles in a bike-renting program. The data come from Washington, DC and cover the last quarter of 2014.
Two data tables are available:
- Trips contains records of individual rentals
- Stations gives the locations of the bike rental stations

Here is the code to read in the data. We do this a little differently than usual, which is why it is included here rather than at the top of this file. To avoid repeatedly re-reading the files, start the data import chunk with {r cache = TRUE} rather than the usual {r}.
data_site <-
"https://www.macalester.edu/~dshuman1/data/112/2014-Q4-Trips-History-Data.rds"
Trips <- readRDS(gzcon(url(data_site)))
Stations <- read_csv("http://www.macalester.edu/~dshuman1/data/112/DC-Stations.csv")
NOTE: The Trips data table is a random subset of 10,000 trips from the full quarterly data. Start with this small data table to develop your analysis commands. When you have this working well, you should access the full data set of more than 600,000 events by removing -Small from the name of the data_site.
It’s natural to expect that bikes are rented more at some times of day, some days of the week, some months of the year than others. The variable sdate gives the time (including the date) that the rental started. Make the following plots and interpret them:
Make a density plot of the events versus sdate. Use geom_density().
Trips %>%
ggplot(aes(x = sdate)) +
geom_density() +
labs(title = "Bike Sharing Over Time",
x = "Date",
y = "Density")
Bike sharing is highest earlier in the quarter, when it is warmer. There is also a slight peak around Christmas, when there is likely more tourism.
Make a density plot of the events versus time of day. You can use mutate() with lubridate’s hour() and minute() functions to extract the hour of the day and minute within the hour from sdate. Hint: A minute is 1/60 of an hour, so create a variable where 3:30 is 3.5 and 3:45 is 3.75.
Trips %>%
mutate(hour = hour(sdate),
minute = minute(sdate)/60,
time = hour + minute) %>%
ggplot(aes(x = time)) +
geom_density() +
labs(title = "Bike Sharing Througout the Day",
x = "Hour (in 24 hour time)",
y = "Density")
Bike sharing has peaks around rush hour, which is likely people using bikes to get to and from work. There is also a peak around the lunch hour.
Make a bar graph of the events versus day of the week.
Trips %>%
mutate(weekday = weekdays(sdate)) %>%
ggplot(aes(y = weekday)) +
geom_bar() +
labs(title = "Number of Bikes Rented by Day",
x = "Count",
y = "Weekday")
Bike sharing is highest during the week, again likely because of work commuters.
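One caveat: weekdays() returns plain character strings, so the bars above sort alphabetically rather than in calendar order. A sketch using lubridate’s wday(label = TRUE), which returns an ordered factor:
Trips %>%
  mutate(weekday = wday(sdate, label = TRUE)) %>% # ordered factor: Sun, Mon, ...
  ggplot(aes(y = weekday)) +
  geom_bar() +
  labs(title = "Number of Bikes Rented by Day",
       x = "Count",
       y = "Weekday")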
Facet the time-of-day density plot by day of the week.
Trips %>%
mutate(hour = hour(sdate),
minute = minute(sdate)/60,
time = hour + minute,
weekday = weekdays(sdate)) %>%
ggplot(aes(x = time)) +
geom_density() +
facet_wrap(~weekday) +
labs(title = "Bike Sharing Througout the Week",
x = "Hours (in 24 hour time)",
y = "Density")
During the week, the distributions are bimodal, with peaks during rush hour due to routine work users. During the weekends, the distributions are unimodal, as these are likely mostly pleasure users, who use them more evenly throughout the day.
The variable client describes whether the renter is a regular user (level Registered) or has not joined the bike-rental organization (level Casual). The next set of exercises investigates whether these two categories of users show different rental behavior and how client interacts with the patterns you found in the previous exercises. Repeat the graphic from the previous exercise (the time-of-day densities faceted by day of the week) with the following changes:
Set the fill aesthetic for geom_density() to the client variable. You should also set alpha = .5 for transparency and color = NA to suppress the outline of the density function.
Trips %>%
mutate(hour = hour(sdate),
minute = minute(sdate)/60,
time = hour + minute,
weekday = weekdays(sdate)) %>%
  ggplot(aes(x = time,
             fill = client)) +
  geom_density(color = NA,
               alpha = 0.5) + # alpha belongs outside aes(); it is a setting, not a mapping
  facet_wrap(~weekday) +
  labs(title = "Bike Sharing Throughout the Week by Client Type",
       x = "Hour (in 24 hour time)",
       y = "Density")
Now add the argument position = position_stack() to geom_density(). In your opinion, is this better or worse in terms of telling a story? What are the advantages/disadvantages of each?
Trips %>%
mutate(hour = hour(sdate),
minute = minute(sdate)/60,
time = hour + minute,
weekday = wday(sdate, label = TRUE)) %>%
  ggplot(aes(x = time,
             fill = client)) +
  geom_density(color = NA,
               alpha = 0.5,
               position = position_stack()) +
  facet_wrap(~weekday) +
  labs(title = "Stacked Density of Bike Sharing by Client Type",
       x = "Hour (in 24 hour time)",
       y = "Density")
In my opinion, this visualization is much more confusing. Each density curve integrates to one on its own, so stacking the two makes the heights hard to interpret in this context. To directly compare the two client types’ usage, the first graph is better; I am struggling to think of any advantage the stacked version has here. Stacking would make more sense if the curves were scaled to counts, as sketched below.
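A sketch of that idea: mapping y to after_stat(count) (ggplot2 >= 3.3) scales each density by its group’s size, so the stacked heights are proportional to the number of trips.
Trips %>%
  mutate(time = hour(sdate) + minute(sdate)/60,
         weekday = wday(sdate, label = TRUE)) %>%
  ggplot(aes(x = time, fill = client)) +
  geom_density(aes(y = after_stat(count)), # density * n, so stacking is meaningful
               color = NA,
               alpha = 0.5,
               position = position_stack()) +
  facet_wrap(~weekday) +
  labs(x = "Hour (in 24 hour time)",
       y = "Estimated count")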
Add a new variable to the dataset called weekend, which will be “weekend” if the day is Saturday or Sunday and “weekday” otherwise (HINT: use the ifelse() function and the wday() function from lubridate). Then, update the graph from the previous problem by faceting on the new weekend variable.
Trips %>%
mutate(hour = hour(sdate),
minute = minute(sdate)/60,
time = hour + minute,
weekdays = wday(sdate, label = TRUE),
weekend = ifelse(weekdays %in% c("Sat", "Sun"), "weekend", "weekday")) %>%
ggplot(aes(x = time,
fill = client)) +
geom_density(color = NA,
position = position_stack(),
alpha = 0.5) +
  facet_wrap(~weekend) +
  labs(title = "Stacked Density of Bike Sharing: Weekday vs. Weekend",
       x = "Hour (in 24 hour time)",
       y = "Density")
Change the graph from the previous problem to facet on client and fill with the weekend variable. What information does this graph tell you that the previous didn’t? Is one graph better than the other?
Trips %>%
mutate(hour = hour(sdate),
minute = minute(sdate)/60,
time = hour + minute,
weekdays = wday(sdate, label = TRUE),
weekend = ifelse(weekdays %in% c("Sat", "Sun"), "weekend", "weekday")) %>%
ggplot(aes(x = time,
fill = weekend)) +
geom_density(color = NA,
position = position_stack(),
alpha = 0.5) +
  facet_wrap(~client) +
  labs(title = "Stacked Density of Bike Sharing by Client Type and Part of Week",
       x = "Hour (in 24 hour time)",
       y = "Density")
This graph makes it easy to see that casual users ride most in the middle of the day and exhibit a unimodal distribution on both weekdays and weekends. I wouldn’t say that one graph is better than the other: they display the same underlying information but emphasize different parts of it.
Use the latitude and longitude variables in Stations to make a visualization of the total number of departures from each station in the Trips data. Use either color or size to show the variation in number of departures. We will improve this plot next week when we learn about maps!
Trips %>%
left_join(Stations,
by = c("sstation" = "name")) %>%
group_by(lat, long) %>%
summarize(total_deps = n()) %>%
  ggplot(aes(x = long, # longitude conventionally goes on the x-axis
             y = lat,
             size = total_deps)) +
  geom_point() +
  labs(title = "Bike Departures by Location",
       x = "Longitude",
       y = "Latitude",
       size = "Total Departures")
Make a variation of the plot above that uses color to show the proportion of departures from each station that are by casual users.
Trips %>%
  left_join(Stations,
            by = c("sstation" = "name")) %>%
  group_by(lat, long) %>%
  summarize(total_deps = n(),
            casprop = mean(client == "Casual")) %>% # proportion of a station's trips by casual users
  ggplot(aes(x = long,
             y = lat,
             size = total_deps,
             color = casprop)) +
  geom_point() +
  labs(title = "Number and Nature of Bike Departures by Location",
       x = "Longitude",
       y = "Latitude",
       size = "Total Departures",
       color = "Proportion of Departures by Casual Users")
Find the ten station-date combinations with the highest number of departures, sorted from most to fewest, and save the result to a new dataset. HINT: as_date(sdate) converts sdate from date-time format to date format.
Trips2 <- Trips %>%
mutate(date = as_date(sdate)) %>%
group_by(sstation, date) %>%
count() %>%
arrange(desc(n)) %>%
head(n = 10)
Trips2
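An equivalent sketch using count() on the ungrouped data and slice_max() (dplyr >= 1.0):
Trips %>%
  mutate(date = as_date(sdate)) %>%
  count(sstation, date, sort = TRUE) %>% # one row per station-date, sorted by n
  slice_max(n, n = 10)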
Use a join operation to make a table with only those trips whose departures match the top ten station-date combinations found above.
Trips %>%
mutate(date = as_date(sdate)) %>%
semi_join(Trips2,
by = c("sstation", "date"))
Group the trips matching those top ten station-date combinations by client type and day of the week, and find the proportion of each client type’s trips that fall on each day.
Trips %>%
mutate(date = as_date(sdate)) %>%
semi_join(Trips2,
by = c("sstation", "date")) %>%
  mutate(weekday = wday(sdate, label = TRUE)) %>%
  group_by(client, weekday) %>%
  summarize(client_trips = n()) %>% # trips per client type and day
  mutate(prop = client_trips/sum(client_trips)) # share of that client type's trips
Registered users are more likely to use the bikes during the week, while a much larger share of casual users’ trips happen on the weekend. This backs up what I predicted earlier.
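A quick sketch to visualize those proportions, reusing the pipeline above:
Trips %>%
  mutate(date = as_date(sdate)) %>%
  semi_join(Trips2, by = c("sstation", "date")) %>%
  mutate(weekday = wday(sdate, label = TRUE)) %>%
  group_by(client, weekday) %>%
  summarize(client_trips = n()) %>%
  mutate(prop = client_trips/sum(client_trips)) %>%
  ggplot(aes(x = prop, y = weekday, fill = client)) +
  geom_col(position = "dodge") +
  labs(x = "Proportion of Client Type's Trips",
       y = "Weekday",
       fill = "Client")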
DID YOU REMEMBER TO GO BACK AND CHANGE THIS SET OF EXERCISES TO THE LARGER DATASET? IF NOT, DO THAT NOW.
This problem uses the data from the Tidy Tuesday competition this week, kids. If you need to refresh your memory on the data, read about it here.
Replicate the graph below using facet_geo(). The graphic won’t load below since it came from a location on my computer. So, you’ll have to reference the original html on the moodle page to see it.
kids %>%
  filter(variable == "lib",
         year %in% c(1997, 2016)) %>%
  mutate(lib_spending = round(inf_adj_perchild*1000)) %>% # inf_adj_perchild is in $1000s
ggplot(aes(x = year, y = lib_spending)) +
geom_line() +
facet_geo(~state) +
labs(title = "Change in Library Spending Per Child, 1997-2016",
y = "Dollars Spent Per Child",
x = "Year")
DID YOU REMEMBER TO UNCOMMENT THE OPTIONS AT THE TOP?